Smart Wearables #
Made together with Sara Kutkova.
Introduction #
In this project, the aim was to make a prototype of a gesture-recognizing glove. We had access to piezo-electric material, conductive and resistive yarn, and different types of conductive fabric. The task was to use the given materials and machine-learning algorithms to detect performed gestures.
We started by defining a goal for our project: recognizing the hand gestures for Chinese numbers.

Our other goal was to keep the glove itself very simple: all parts should be modular and attachable to the glove independently. At the same time, we still strived for the best results from the recognition algorithm, which affected the design of the sensors. In short, the glove should be standalone, soft, and wireless, and include well-performing modular sensors.
Sensor design #
Before finding the design that worked for us, we went through a few iterations of unsuccessful attempts.

We started with a round patch of conductive fabric on a piezo-electric layer, planning to attach it to the glove either by sewing or by lamination. The sewing did not work out: hand sewing was too loose, and machine sewing was too tight (which resulted in minimal resistance, making it impossible to detect the bending).

In the end, we went for the separate, modular sensor design. The first attempt was unsuccessful because we used non-stretchable fabric. This made the sensor curve in the resting state (which caused some tension in the sensor and lowered its resistance), so it was hard to detect the change between the bent state and the resting state.

The modular sensor we ended up using was made from stretchable fabric (both the conductive and the non-conductive parts). The tip we received from one of the teaching assistants was that the sensor should be slightly stretched even in the resting state. In our sensor, one layer of piezo-electric material sits between two pieces of conductive stretchable fabric; all of this is laminated onto stretchable non-conductive fabric and attached to the glove using press studs.

We also used a simple touch sensor made from the conductive fabric. This type of sensor is very precise and can reliably detect which fingers touch, since touching fingers close a circuit.
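As an illustration of the principle, each touch pad can be read as a simple digital input. The MicroPython sketch below is only a hedged example: the pin number and the pull-up wiring are assumptions, not our exact firmware.

```python
# Hypothetical MicroPython sketch of a closed-circuit touch read.
from machine import Pin

touch = Pin(4, Pin.IN, Pin.PULL_UP)  # one fingertip pad on GPIO 4 (assumed)

def fingers_touching() -> bool:
    # The matching pad on the other finger is wired to ground, so contact
    # pulls the pin low, i.e. closes the circuit.
    return touch.value() == 0
```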

In the end, we implemented both sensor types in our glove to get more precise results. We feed the output from the bend sensors into the machine-learning algorithm to detect the gestures, and the output from the touch sensors to manage the interface.

After the design was finalized, we made all four sensors and conducted a series of comprehensive testing sessions to evaluate their properties and capabilities. These tests were crucial in determining the performance and reliability of the sensors under various conditions, particularly how they respond to different degrees of bending and pressure.
This is the video where we tested the min/max values of the sensor with a multimeter; it also demonstrates the linear scaling of the sensor, thanks to the multimeter's built-in visualizer. The results of our tests were quite promising, and we found the sensors to be consistent in their readings. At their maximum, the sensors registered resistance values ranging from 1,000 to 3,000 ohms. When pressure is applied, such as during finger bending, the resistance dramatically decreases, dropping below 50 ohms. Moreover, the sensors demonstrated excellent linearity in their scaling, particularly noticeable in the range between 1,000 and 100 ohms. This linear scaling is especially beneficial for detecting gestures that involve partially bent finger positions. These properties create high-quality features that are critical for our machine-learning algorithm to distinguish different gestures.
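To see what these resistance values mean at the microcontroller, we can compute the voltage-divider output across the sensor's range. This is only a back-of-the-envelope sketch, assuming a 3.3 V supply and the 1 kΩ series resistor from our circuit design, with the sensor as the lower leg of the divider:

```python
# Expected divider voltage over the sensor's measured resistance range.
VCC = 3.3          # assumed supply voltage in volts
R_FIXED = 1_000.0  # the 1 kOhm series resistor from our circuit design

def divider_voltage(r_sensor: float) -> float:
    """Voltage across the sensor in a simple voltage divider."""
    return VCC * r_sensor / (r_sensor + R_FIXED)

for r in (3_000, 1_000, 100, 50):  # resting -> fully bent, in ohms
    print(f"{r:>5} ohm -> {divider_voltage(r):.2f} V")
```

With these numbers, the output spans roughly 2.5 V at rest down to about 0.15 V when fully bent, which gives the ADC plenty of range to resolve intermediate bending.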
Glove design #
We wanted the glove to be minimalistic (almost pretty), so the aim was to hide all the electronic components and go with soft materials. Since we wanted to detect the hand gestures for Chinese numbers, we decided to detect the state the hand is in using the bend sensors. We started by unpicking the glove, then machine-sewed the traces and attached the conductive press studs for the bend sensors.
The first version of the glove included the touch sensors only; the second version included the bend sensors as well. In the second version, we faced “wiring issues”: there were four bend sensors, each needing a connection to an analogue input pin and ground, plus three touch sensors. In total, eight press studs on the bottom of the glove had to be connected to the ESP32 board, which meant that some of the wiring had to run across the front of the glove as well.


Another important part of the glove design was a box for the electronics, attached by a wristband. We tried to make the box as small as possible, with the option to have the lid open or closed. However, as it had to house the custom-made PCB, the ESP32 board, and the battery, it was not possible to make it more compact.

If we were to develop the glove further, we could use a soft silicone box with rounded edges, which would improve the overall wearability. Cost efficiency was not the first thing on our mind when making this glove; precision and accuracy were. However, the design we made, mainly its modularity, helps there too: if any part of the glove breaks, we don't need to remake everything and use up a lot of material, we can just replace the broken part (for example, a single sensor or the microcontroller). We also used an ESP32 board, which is much cheaper than a regular Arduino.
Software design #
Computer client #
The goal of the software design is to provide an efficient and intuitive way to recognize and interpret hand gestures. To that end, the software serves two primary functions: data gathering and gesture detection.
The first function of the software is data acquisition. The user performs a gesture with the glove and then selects the corresponding gesture via the user interface. This triggers the software to send a request to the glove's onboard system over a WiFi connection; the system then reads, processes, and stores the gesture data in separate files, categorized by the type of gesture performed. This automated categorization eliminates the need for manual labelling, which significantly enhances the efficiency of the data collection process.
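As a rough illustration of this flow, the sketch below polls the glove once and appends the reading to a per-gesture file. The IP address, the /read endpoint, and the "bend" field are hypothetical names standing in for our actual API:

```python
# Minimal data-collection client; endpoint and payload schema are assumed.
import csv
import requests

GLOVE_URL = "http://192.168.4.1/read"  # assumed address of the ESP32 server

def collect_sample(gesture_label: str) -> None:
    """Request one sensor reading and append it to the gesture's own file."""
    reading = requests.get(GLOVE_URL, timeout=2).json()
    with open(f"data/{gesture_label}.csv", "a", newline="") as f:
        csv.writer(f).writerow(reading["bend"])  # four bend-sensor values

# Example: the user selected "five" in the UI and performed the gesture.
collect_sample("five")
```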
With a sufficient dataset collected, the focus shifts to training machine learning models. This part will be discussed in the following section where we expand on how we process the data, choose the model, train them and evaluate the results.
Upon completion of the training phase, the software uses these ML models for real-time gesture detection. When a user performs a gesture, the interface calls a function that triggers a new sensor reading from the glove. The software processes this input and uses the pre-trained ML model to predict the gesture; the result is then displayed on the interface, providing immediate feedback to the user. Besides the core functionality, we developed additional features to support user interaction. A notable addition is that the touch sensors in the glove allow users to navigate through menus with simple touch gestures. Furthermore, an entertaining aspect is introduced by integrating a ‘Dino Running’ game, which can be controlled using the glove. The primary gesture-detection functionality, however, is kept on a separate interface to ensure clarity and efficiency.
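A minimal sketch of the on-demand detection step could look as follows, assuming the trained model was saved with joblib and reusing the hypothetical endpoint from the data-collection sketch:

```python
# On-demand gesture detection; file name and endpoint are assumptions.
import joblib
import requests

GLOVE_URL = "http://192.168.4.1/read"      # assumed ESP32 address
model = joblib.load("gesture_svm.joblib")  # assumed model file

def detect_gesture() -> str:
    """Fetch one reading and classify it with the pre-trained model."""
    reading = requests.get(GLOVE_URL, timeout=2).json()
    return model.predict([reading["bend"]])[0]

print("Detected gesture:", detect_gesture())
```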
ESP32 server #
In our design, the ESP32 microcontroller functions as a server that efficiently manages data communication with the client. To optimize this interaction, we employed an async HTTP request handler. This approach significantly enhances the system's efficiency in several ways.
Firstly, by utilizing asynchronous communication, the ESP32 is not perpetually engaged in maintaining a connection with the client. Instead, it activates only when necessary to handle incoming requests. This method stands in contrast to a continuous data streaming approach, where the microcontroller would be in a constant state of readiness, leading to higher power consumption.
When a request is received, the ESP32 performs two critical functions: data collection and data transfer. It begins by reading sensor data from the ADC module through the I2C pins. This data is then formatted as JSON, which offers several advantages: primarily, it standardizes the data into a universally recognized structure, making it straightforward for the client to receive and process the information.
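For illustration, the sketch below mimics the server's request/response cycle in MicroPython. It is deliberately simplified: it is synchronous (our actual firmware uses an async handler), serves a single route, and stubs out the ADS1115 readings:

```python
# Simplified MicroPython stand-in for the ESP32 server loop.
import json
import socket

def read_sensors() -> list:
    # On the real board these four values come from the ADS1115 over I2C;
    # stubbed here for illustration.
    return [0, 0, 0, 0]

s = socket.socket()
s.bind(("0.0.0.0", 80))
s.listen(1)
while True:
    conn, _ = s.accept()
    conn.recv(1024)  # consume the HTTP request; only one route is served
    body = json.dumps({"bend": read_sensors()})
    conn.send(b"HTTP/1.1 200 OK\r\nContent-Type: application/json\r\n\r\n")
    conn.send(body.encode())
    conn.close()
```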
Hardware design #
The hardware part of our project aimed for a compact design that handles the data-collection and wireless-communication functions. Design-wise, it can be separated into two sections: the integration of sensors and the connectivity solution, and the PCB design for the microcontroller.
Connections #
Firstly, as our glove features a variety of sensors, including the bend and touch sensors, each providing vital data for gesture recognition, we needed a detachable connection between these sensors and the microcontroller. This connection design includes eight pins in total: four for the bend sensors, three for the touch sensors, and one common VCC for power distribution.
A key aspect of our design is the strategic placement of these pins at the bottom part of the glove, enabling a streamlined connection process. The initial design envisioned the use of a ‘receive fabric’ that would act as the female connector for the press studs. The idea was to attach this fabric to the glove, with wires soldered to the fabric's backside leading to the PCB that houses the microcontroller. This approach was intended to make the hardware fully detachable, leaving the glove itself clean and washable. Once connected, the box would cover the connection area, leaving no visible connections on the surface.
However, challenges arose when the signal traces embedded in the glove failed, which forced a pivot in our design approach. We adapted by directly soldering jumper wires from the sensors to the PCB. While this workaround deviated from our original clean, detachable design, it proved effective in practice, and despite the deviation, the hardware components, including both the PCB and the sensors, remain detachable.
PCB design #
In the development of our smart glove project, a critical component is the PCB, designed to effectively manage sensor data and communication. Our approach to PCB design focused on balancing functionality with the need for a compact, portable form factor, leading to several design iterations and problem-solving.
We chose the Xiao ESP32C3 as our main controller, replacing a standard Arduino. The Xiao ESP32 was selected for its small size, which is essential for a portable device like our smart glove, without compromising on the necessary features (four analogue pins, three digital pins, and a wireless connection). Our first PCB version was designed around this board, maintaining simplicity with just the essential connections and the resistors needed for the sensors.

However, during testing, we encountered a significant issue: when the Xiao ESP32's Wi-Fi connectivity was active, we were restricted to using only one of its two ADC units. This limitation reduced the available analogue pins to three, insufficient for detecting all the required gestures. To overcome it, we revised our PCB design to include an additional module: the ADS1115 chip.

The ADS1115 is a high-precision, 16-bit ADC capable of managing four channels of analogue input, and it communicates with the main controller via the I2C protocol. By integrating the ADS1115 into our PCB, we effectively offloaded the ADC function from the ESP32, allowing it to focus on wireless communication without being burdened by data-collection duties.
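As an illustration of reading the four channels over I2C, here is a sketch using Adafruit's CircuitPython driver for the ADS1115. The pin names and driver choice are assumptions; our actual firmware may use a different library:

```python
# Reading all four ADS1115 channels via I2C (Adafruit CircuitPython driver).
import board
import busio
import adafruit_ads1x15.ads1115 as ADS
from adafruit_ads1x15.analog_in import AnalogIn

i2c = busio.I2C(board.SCL, board.SDA)  # assumed default I2C pins
ads = ADS.ADS1115(i2c)

channels = [AnalogIn(ads, ch) for ch in (ADS.P0, ADS.P1, ADS.P2, ADS.P3)]
print([ch.value for ch in channels])  # one 16-bit reading per bend sensor
```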

We eventually succeeded in fitting the additional module onto a similarly sized PCB that is compatible with the original box design. Although routing took some time, we were able to accomplish it. The board could be even more compact if we used the ESP32 and ADS1115 directly as chips rather than modules; however, this was a bit beyond our current capabilities, and for the prototype we aimed to create, the current outcome is satisfactory.
Machine learning #
Data preparation #
Since we used four practically identical sensors with similar reading ranges, and our circuit design uses a 1 kΩ resistor to divide the voltage, we got consistent readings across the four sensors, so there was no need for normalization.
Initially, our training dataset aimed to recognize 11 classes representing the Chinese signs for numbers 0 through 10. For each class, we collected 50 data points. However, during the evaluation phase, we encountered challenges with the algorithm’s ability to accurately distinguish the sign for the number nine. Consequently, we decided to exclude this gesture from our dataset. This revision brought our total training dataset to 500 samples (50 samples for each of the 10 remaining gestures), which, while modest in size, proved sufficient for our purposes and helped avoid overfitting.
To further examine our model, we compiled a validation set that consists of 60 data points for each gesture. Recognizing the potential for bias in the data collection process, we enlisted the help of three friends to contribute to the validation set. Each participant performed each gesture 20 times, thereby introducing a wider range of hand shapes and movement styles into our dataset. This inclusion of diverse data sources was crucial in ensuring the robustness of our gesture recognition model. In line with conventional machine learning practices, we adopted an 8:2 split for dividing the data into training and testing sets. This allocation resulted in 400 data points dedicated to training the model and 100 data points for testing, as well as 600 independent data points for evaluation.
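The sketch below illustrates how such a dataset can be assembled and split 8:2; the per-gesture CSV layout and file names are illustrative, not our exact pipeline:

```python
# Assemble the per-gesture files into one dataset and split it 8:2.
import numpy as np
from sklearn.model_selection import train_test_split

GESTURES = ["zero", "one", "two", "three", "four",
            "five", "six", "seven", "eight", "ten"]  # "nine" was dropped

X_parts, y = [], []
for label in GESTURES:
    samples = np.loadtxt(f"data/{label}.csv", delimiter=",")  # 50 rows x 4
    X_parts.append(samples)
    y.extend([label] * len(samples))
X, y = np.vstack(X_parts), np.array(y)

# 8:2 split -> 400 training and 100 testing points, stratified per class.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
```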
Visualization #
Given that our data samples are four-dimensional vectors, representing readings from four different sensors, direct plotting was not straightforward. To address this, we employed several visualization techniques to gain deeper insights into our data.

One of the primary methods we used was Principal Component Analysis (PCA), which helped us reduce the data’s dimensionality from four to two. This reduction made it feasible to plot the data on a two-dimensional graph. The resulting PCA plot provided valuable insights, particularly regarding the challenges faced by our algorithm in differentiating certain gestures. Notably, we observed some overlap in the data points for the number nine gesture with other classes, suggesting potential difficulties for the algorithm in distinguishing this gesture from others.
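For reference, the two-dimensional projection can be produced with scikit-learn's PCA, continuing from the dataset sketch above:

```python
# Project the 4-D sensor readings onto their two principal components.
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

X_2d = PCA(n_components=2).fit_transform(X)
for label in GESTURES:
    mask = y == label
    plt.scatter(X_2d[mask, 0], X_2d[mask, 1], label=label, s=10)
plt.legend()
plt.show()
```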

Another visualization technique we used was a parallel coordinates graph. This approach allowed us to examine individual features corresponding to each sensor. Through this graph, we could analyze how each sensor (feature) responded to different gestures. For instance, we noticed that the readings for feature one, representing thumb movements, were quite clustered. This indicated that the thumb’s movement across various gestures was relatively limited. In contrast, feature two showed excellent linear scaling, effectively capturing the half-bent state of fingers in gestures like the class seven sign. However, we also detected some anomalous readings in the fourth feature, which required further investigation.
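The parallel-coordinates view can be reproduced with pandas' built-in plot, again continuing from the dataset sketch. Feature one is the thumb, as noted above; the remaining column names are our assumption about the sensor-to-finger mapping:

```python
# One line per sample, one axis per sensor, coloured by gesture class.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame(X, columns=["thumb", "index", "middle", "ring"])
df["gesture"] = y
pd.plotting.parallel_coordinates(df, "gesture", alpha=0.3)
plt.show()
```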
To corroborate our findings, we also utilized a theta grid plot. This method reinforced our previous observations, highlighting the same patterns and anomalies detected in the parallel coordinates graph.
Algorithms and outcomes #
A significant emphasis was placed on selecting the most effective machine-learning algorithm to accurately interpret the sensor data for gesture recognition. We experimented with a range of algorithms to determine the best fit for our dataset and requirements. The primary algorithms we tested were the Support Vector Machine (SVM) with different kernels and the K-nearest neighbours (KNN) algorithm.
We first experimented with SVM, utilizing both linear and polynomial (poly) kernels. SVM is known for its effectiveness in classification tasks, particularly in high-dimensional spaces. We applied two different strategies for multi-class classification with SVM: One-vs-One (OvO) and One-vs-Rest (OvR). The OvO approach compares each class against every other class, while OvR compares each class against all other classes combined. The results were quite revealing: the SVM with a linear kernel using the OvO strategy showed the best overall performance, with accuracy rates exceeding 80% for nearly all classes. However, this approach struggled with class 7 and class 9, both of which involve detecting half-bent finger positions. This suggested a potential area for further refinement, especially for gestures with subtle movements.
Surprisingly, the KNN algorithm, with a setting of K=2, also performed well in our tests. KNN’s effectiveness in our project could be attributed to the nature of our dataset, which is relatively small and sparse. Despite its generally good performance, KNN, like SVM, faced challenges with the more complex gestures involving half-bent fingers.
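The comparison itself is straightforward with scikit-learn; the sketch below mirrors the setups described above, using the split from the data-preparation sketch:

```python
# Compare the SVM variants and KNN on the held-out test set.
from sklearn.multiclass import OneVsRestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

models = {
    "SVM linear (OvO)": SVC(kernel="linear"),  # SVC is one-vs-one internally
    "SVM poly (OvO)": SVC(kernel="poly"),
    "SVM linear (OvR)": OneVsRestClassifier(SVC(kernel="linear")),
    "KNN (k=2)": KNeighborsClassifier(n_neighbors=2),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: {model.score(X_test, y_test):.2%}")
```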
Overall, we were pleased with the outcomes of our algorithm testing. The careful placement of sensors on the glove and their consistent readings and scaling contributed to a robust dataset, and with this high-quality dataset we were able to achieve satisfactory results using relatively straightforward algorithms. Moving forward, there is potential for further optimization, particularly in recognizing more subtle gestures. You can check the attached Jupyter Notebook for more detailed information.
Testing and results #
Right before the demo day, we managed to break the glove (the sewn traces were no longer connected/conductive), and we had to run cables between the sensors and the ESP32 board. This affected the performance of the glove and the feedback we received.

We received six feedback forms in total. Overall, the feedback was positive, but there is still room for improvement.
The weakest point of our glove was that it was not easy to put on. As the glove is wireless, with no cables running between the glove and a computer, we had to incorporate the board, battery, and PCB with resistors into the glove design by adding the box. During the demo session, we helped people put the glove and box on to save time so that more people could try it out. This drew negative feedback, and it is also a serious wearability concern.
The solution could have been a “soft box” design, and perhaps a custom-made glove with a longer “sleeve” to which the box could be attached. We believe that if the design had consisted of only one component, it would have been easier to put on. However, we put effort into ensuring that everyone could use the glove: the glove itself was stretchable, and the wristband was adjustable to fit many hand sizes. Without this requirement, we would have had more room to experiment with the design.
Most of the positive feedback was about the design of the glove: that there were not too many cables, that everything was hidden in the box, and that the sensors were modular, so if one of them performs poorly, we can exchange it. Another positive point, mentioned a few times, was the incorporation of wireless technology into our design.
As one comment mentioned, we could improve our glove by adding continuous gesture recognition instead of on-demand recognition. This was a good point, and we should incorporate it into our design. Other comments mentioned that we should be able to detect more gestures. That problem was caused by having only three functioning bend sensors during the demo session, which has since been fixed by the external ADC.